12. SVM Image Classification
You've now seen how an SVM can be used to classify multi-class datasets, but only with two features describing each element. With your point cloud data, you'll have a rich feature set containing color and surface normal histograms. Classification with a rich feature set works just the same as with two features, but it's harder to visualize, so here we'll work through the example of image classification using color histograms.
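As a quick illustration (not part of the original exercise, and using random data purely to show the shapes involved), the scikit-learn API is identical whether each sample has two features or several hundred:
import numpy as np
from sklearn.svm import SVC

# Toy data: 100 samples, each a 96-dimensional feature vector
# (roughly the length of a few concatenated histograms); the values
# here are random and purely illustrative
X = np.random.rand(100, 96)
y = np.random.randint(0, 2, 100)  # binary labels

# Same classifier definition you'd use with 2-D features
clf = SVC(kernel='linear')
clf.fit(X, y)
print(clf.predict(X[:5]))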
To demonstrate image classification, we'll borrow an exercise from the Self-Driving Car Nanodegree Program! In this exercise, the dataset is composed of hundreds of images of cars, along with images of other things you might find in a scene with a car. Your goal is to train an SVM to recognize whether an image contains a car or not, based on an input feature vector composed of color histograms. Along the way we'll introduce a few more concepts related to preparing your training data and evaluating the performance of your classifier.
To begin with, you'll read in the car and non-car images, extract the color features for each, then scale the feature vectors to zero mean and unit variance.
After that you'll define a labels vector, shuffle and split the data into training and testing sets, and finally, define a classifier and train it!
Your labels vector y in this case will just be a binary vector indicating whether each feature vector in your dataset corresponds to a car or a non-car (1's for cars, 0's for non-cars). Here we provide a function called extract_features() that will call the color_hist() function you defined in a previous exercise and generate a list of features from the image dataset.
# Define a function to extract features from a list of images
# Have this function call color_hist()
def extract_features(imgs, hist_bins=32, hist_range=(0, 256)):
    # Create a list to append feature vectors to
    features = []
    # Iterate through the list of images
    for file in imgs:
        # Read in each one by one
        image = mpimg.imread(file)
        # Apply color_hist()
        hist_features = color_hist(image, nbins=hist_bins, bins_range=hist_range)
        # Append the new feature vector to the features list
        features.append(hist_features)
    # Return list of feature vectors
    return features
Given lists of car and non-car features we can define a labels vector (just a bunch of ones and zeros) like this:
import numpy as np
# Define a labels vector based on features lists
y = np.hstack((np.ones(len(car_features)),
               np.zeros(len(notcar_features))))
Next, we'll stack and scale our feature vectors. Stacking into a single array is done in order to get the data into the format expected by sklearn. Scaling is a more subtle issue: in the stacked array each feature will occupy a column, and when some features are much larger in magnitude than others, they can lead to poor performance of your classifier. So it's always a good idea to perform a per-column normalization to make sure all of your features are roughly the same scale (here we're scaling to zero mean and unit variance).
from sklearn.preprocessing import StandardScaler
# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)
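One subtlety worth a short sketch: the same fitted scaler must be applied to any feature vector you classify later, or the classifier will see features on a different scale than it was trained on. Here's a minimal illustration, assuming the X_scaler, color_hist(), and mpimg names from this lesson; 'new_image.jpeg' is a hypothetical file name:
# Read a new image and extract the same color histogram features
new_image = mpimg.imread('new_image.jpeg')  # hypothetical file
new_features = color_hist(new_image, nbins=32, bins_range=(0, 256))
# reshape(1, -1) turns one feature vector into a single-sample 2D array,
# the shape sklearn expects; use the scaler fitted on the training data
scaled_new = X_scaler.transform(new_features.reshape(1, -1))
# scaled_new is now ready to pass to a trained classifier's predict()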
And now we're ready to shuffle and split the data into training and testing sets. It's always a good idea to test your classifier on a separate dataset from the one you trained on, but first you should always randomize (shuffle) the data. This ensures that any ordering of your data (for example, a bunch of red cars at the beginning of the dataset and blue cars at the end) doesn't affect the training of your classifier.
To do this we'll use the Scikit-Learn train_test_split() function, but it's worth noting that this function recently moved from the sklearn.cross_validation package (in sklearn version <= 0.17) to the sklearn.model_selection package (in sklearn version >= 0.18). In the quiz editor we're still running sklearn v0.17, so we'll import it like this:
from sklearn.cross_validation import train_test_split
# But, if you are using scikit-learn >= 0.18 then use this:
# from sklearn.model_selection import train_test_split
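If you're writing code that needs to run under either version, one convenient pattern (a sketch, not part of the original exercise) is to try the newer module first and fall back to the older one:
try:
    # sklearn >= 0.18
    from sklearn.model_selection import train_test_split
except ImportError:
    # sklearn <= 0.17
    from sklearn.cross_validation import train_test_split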
train_test_split() performs both the shuffle and the split of the data, and you'll call it like this (here choosing to initialize the shuffle with a different random state each time):
# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
    scaled_X, y, test_size=0.2, random_state=rand_state)
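A side note: a fresh random state on each run gives you a sense of how sensitive your accuracy is to the particular split, but if you want runs to be reproducible (for example, while comparing histbin values later), fix the seed instead. The value 42 below is an arbitrary choice:
# Fixed seed => identical split (and comparable results) on every run
X_train, X_test, y_train, y_test = train_test_split(
    scaled_X, y, test_size=0.2, random_state=42)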
Now you're ready to define and train a classifier! Here we'll use the same SVC with a linear kernel as before. Defining and training your classifier takes just a few lines of code:
from sklearn.svm import SVC
# Use a linear SVC (support vector classifier)
svc = SVC(kernel='linear')
# Train the SVC
svc.fit(X_train, y_train)
Then you can check the accuracy of your classifier on the test dataset like this:
print('Test Accuracy of SVC = ', svc.score(X_test, y_test))
Or you can make predictions on a subset of the test data and compare directly with ground truth:
print('My SVC predicts: ', svc.predict(X_test[0:10]))
print('For labels: ', y_test[0:10])
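Beyond raw accuracy, it can be informative to see where the classifier goes wrong. Here's a brief sketch using scikit-learn's confusion matrix (assuming the trained svc, X_test, and y_test from above):
from sklearn.metrics import confusion_matrix

# Rows are true classes (non-car, car), columns are predicted classes;
# the off-diagonal counts are the misclassifications
y_pred = svc.predict(X_test)
print('Confusion matrix:\n', confusion_matrix(y_test, y_pred))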
Play with the parameter value histbin in the exercise below to see how the classifier accuracy and training time vary with the feature vector input.
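If you'd like to automate that experiment, here's a rough sketch of a sweep over bin counts. It assumes the extract_features() function and the cars/notcars lists from the quiz code below, and simply repeats the train/score pipeline for each value:
import time
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.cross_validation import train_test_split  # sklearn <= 0.17

for histbin in (8, 16, 32, 64):
    # Re-extract features at this bin count
    car_features = extract_features(cars, hist_bins=histbin)
    notcar_features = extract_features(notcars, hist_bins=histbin)
    X = np.vstack((car_features, notcar_features)).astype(np.float64)
    scaled_X = StandardScaler().fit_transform(X)
    y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))
    # Fixed seed so each bin count sees the same split
    X_train, X_test, y_train, y_test = train_test_split(
        scaled_X, y, test_size=0.2, random_state=42)
    svc = SVC(kernel='linear')
    t0 = time.time()
    svc.fit(X_train, y_train)
    print(histbin, 'bins:', round(svc.score(X_test, y_test), 4),
          'accuracy,', round(time.time() - t0, 2), 's to train')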
Start Quiz:
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
import cv2
import glob
import time
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
# NOTE: the next import is only valid
# for scikit-learn version <= 0.17
# if you are using scikit-learn >= 0.18 then use this:
# from sklearn.model_selection import train_test_split
from sklearn.cross_validation import train_test_split

# Define a function to compute color histogram features
def color_hist(img, nbins=32, bins_range=(0, 256)):
    # Convert from RGB to HSV using cv2.cvtColor()
    hsv_img = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
    # Compute the histogram of the HSV channels separately
    h_hist = np.histogram(hsv_img[:,:,0], bins=nbins, range=bins_range)
    s_hist = np.histogram(hsv_img[:,:,1], bins=nbins, range=bins_range)
    v_hist = np.histogram(hsv_img[:,:,2], bins=nbins, range=bins_range)
    # Concatenate the histograms into a single feature vector
    hist_features = np.concatenate((h_hist[0], s_hist[0], v_hist[0])).astype(np.float64)
    # Normalize the result
    norm_features = hist_features / np.sum(hist_features)
    # Return the feature vector
    return norm_features

# Define a function to extract features from a list of images
# Have this function call color_hist()
def extract_features(imgs, hist_bins=32, hist_range=(0, 256)):
    # Create a list to append feature vectors to
    features = []
    # Iterate through the list of images
    for file in imgs:
        # Read in each one by one
        image = mpimg.imread(file)
        # Apply color_hist()
        hist_features = color_hist(image, nbins=hist_bins, bins_range=hist_range)
        # Append the new feature vector to the features list
        features.append(hist_features)
    # Return list of feature vectors
    return features

# Read in car and non-car images
images = glob.glob('*.jpeg')
cars = []
notcars = []
for image in images:
    if 'image' in image or 'extra' in image:
        notcars.append(image)
    else:
        cars.append(image)

# TODO play with this value to see how your classifier
# performs under different binning scenarios
histbin = 32

car_features = extract_features(cars, hist_bins=histbin, hist_range=(0, 256))
notcar_features = extract_features(notcars, hist_bins=histbin, hist_range=(0, 256))

# Create an array stack of feature vectors
X = np.vstack((car_features, notcar_features)).astype(np.float64)
# Fit a per-column scaler
X_scaler = StandardScaler().fit(X)
# Apply the scaler to X
scaled_X = X_scaler.transform(X)

# Define the labels vector
y = np.hstack((np.ones(len(car_features)), np.zeros(len(notcar_features))))

# Split up data into randomized training and test sets
rand_state = np.random.randint(0, 100)
X_train, X_test, y_train, y_test = train_test_split(
    scaled_X, y, test_size=0.2, random_state=rand_state)

print('Dataset includes', len(cars), 'cars and', len(notcars), 'not-cars')
print('Using', histbin, 'histogram bins')
print('Feature vector length:', len(X_train[0]))

# Use a linear SVC
svc = SVC(kernel='linear')
# Check the training time for the SVC
t = time.time()
svc.fit(X_train, y_train)
t2 = time.time()
print(round(t2 - t, 2), 'Seconds to train SVC...')
# Check the score of the SVC
print('Test Accuracy of SVC = ', round(svc.score(X_test, y_test), 4))
# Check the prediction time for a batch of n_predict samples
t = time.time()
n_predict = 10
print('My SVC predicts: ', svc.predict(X_test[0:n_predict]))
print('For these', n_predict, 'labels: ', y_test[0:n_predict])
t2 = time.time()
print(round(t2 - t, 5), 'Seconds to predict', n_predict, 'labels with SVC')
If you'd like to work or experiment with the quiz's dataset locally, you can download it from the following sources -